Partitioning Models for General Medium-Grain Parallel Sparse Tensor Decomposition

Authors

Abstract

The focus of this article is the efficient parallelization of the canonical polyadic decomposition algorithm utilizing the alternating least squares method for sparse tensors on distributed-memory architectures. We propose a hypergraph model for general medium-grain partitioning which does not enforce any topological constraint on the partitioning. The proposed model is based on splitting the given tensor into nonzero-disjoint component tensors. Then a mode-dependent coarse-grain hypergraph is constructed for each component tensor. A net amalgamation operation is proposed to form a composite medium-grain hypergraph from these coarse-grain hypergraphs that correctly encapsulates the minimization of the total communication volume. We propose a heuristic which splits the nonzeros of dense slices to obtain sparse slices in component tensors. So we partially attain slice coherency at the (sub)slice level since partitioning is performed on (sub)slices instead of individual nonzeros. We also utilize the well-known recursive-bipartitioning framework to improve the quality of the splitting heuristic. Finally, we propose a tripartite graph model with the aim of a faster partitioning at the expense of an increase in the total communication volume. Parallel experiments conducted on 10 real-world tensors on up to 1024 processors confirm the validity of the proposed hypergraph and graph models.
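The computation being parallelized — canonical polyadic decomposition via alternating least squares (CP-ALS) — can be illustrated with a minimal sequential sketch. This is an illustrative reconstruction for a 3-mode sparse tensor in COO form, not the authors' code: each ALS sweep performs a matricized-tensor-times-Khatri-Rao-product (MTTKRP) per mode, then solves against the Hadamard product of the other factors' Gram matrices.

```python
import numpy as np

def cp_als_sparse(coords, vals, shape, rank, iters=20, seed=0):
    """Minimal CP-ALS for a 3-mode sparse tensor in COO form.

    coords: (nnz, 3) int array of nonzero indices; vals: (nnz,) values.
    Returns factor matrices, one of shape (shape[m], rank) per mode.
    """
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in shape]
    for _ in range(iters):
        for m in range(3):
            o1, o2 = [k for k in range(3) if k != m]
            F1, F2 = factors[o1], factors[o2]
            # MTTKRP: each nonzero contributes val * (row of F1 * row of F2)
            K = vals[:, None] * F1[coords[:, o1]] * F2[coords[:, o2]]
            M = np.zeros((shape[m], rank))
            np.add.at(M, coords[:, m], K)
            # Normal equations: Hadamard product of the Gram matrices
            G = (F1.T @ F1) * (F2.T @ F2)
            factors[m] = M @ np.linalg.pinv(G)
    return factors
```

The per-nonzero MTTKRP accumulation is the step whose data dependencies the medium-grain partitioning models distribute across processors.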


Related Articles

Hypergraph-Partitioning-Based Decomposition for Parallel Sparse-Matrix Vector Multiplication

In this work, we show that the standard graph-partitioning-based decomposition of sparse matrices does not reflect the actual communication volume requirement for parallel matrix-vector multiplication. We propose two computational hypergraph models which avoid this crucial deficiency of the graph model. The proposed models reduce the decomposition problem to the well-known hypergraph partition...
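The deficiency this blurb refers to can be made concrete: under the column-net hypergraph model with 1D rowwise partitioning, the exact communication volume of a parallel sparse matrix-vector multiply equals the connectivity-minus-one cut over the column nets. A small illustrative sketch (my own example, not the paper's code):

```python
def connectivity_volume(rows, cols, part):
    """Communication volume of a 1D rowwise partition under the
    column-net hypergraph model: sum over columns of (lambda_j - 1),
    where lambda_j is the number of distinct parts whose rows have a
    nonzero in column j.

    rows, cols: COO pattern of the sparse matrix; part[i]: part of row i.
    """
    touching = {}
    for r, c in zip(rows, cols):
        touching.setdefault(c, set()).add(part[r])
    return sum(len(parts) - 1 for parts in touching.values())
```

A column whose nonzeros span k parts forces its x-vector entry to be sent to k - 1 remote processors, which is exactly what the connectivity metric counts and what a plain edge-cut graph metric only approximates.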


Partitioning Sparse Rectangular Matrices for Parallel Processing

We are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well-studied in the square symmetric case, but the rectangular problem has received very little attention. We will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. We will extend the spectral partitioning method for symmetric m...


Sparse and Low-Rank Tensor Decomposition

Motivated by the problem of robust factorization of a low-rank tensor, we study the question of sparse and low-rank tensor decomposition. We present an efficient computational algorithm that modifies Leurgans' algorithm for tensor factorization. Our method relies on a reduction of the problem to sparse and low-rank matrix decomposition via the notion of tensor contraction. We use well-understoo...


BTF Compression via Sparse Tensor Decomposition

In this paper, we present a novel compression technique for Bidirectional Texture Functions based on a sparse tensor decomposition. We apply the K-SVD algorithm along two different modes of a tensor to decompose it into a small dictionary and two sparse tensors. This representation is very compact, allowing for considerably better compression ratios at the same RMS error than possible with curr...


Revisiting Hypergraph Models for Sparse Matrix Partitioning

We provide an exposition of hypergraph models for parallelizing sparse matrix-vector multiplies. Our aim is to emphasize the expressive power of hypergraph models. First, we set forth an elementary hypergraph model for parallel matrix-vector multiply based on one-dimensional (1D) matrix partitioning. In the elementary model, the vertices represent the data of a matrix-vector multiply, and the n...



Journal

Journal title: IEEE Transactions on Parallel and Distributed Systems

Year: 2021

ISSN: 1045-9219, 1558-2183, 2161-9883

DOI: https://doi.org/10.1109/tpds.2020.3012624